Evaluating Loss Functions for Graph Neural Networks: Towards Pretraining and Generalization
Abbas, Khushnood; Hou, Ruizhe; Wengang, Zhou; Shi, Dong; Ling, Niu; Nan, Satyaki; Abbasi, Alireza
Graph Neural Networks (GNNs) have become a powerful tool for learning on non-Euclidean data. However, their performance depends critically on two choices: the model architecture and the training objective, i.e., the loss function. These components have largely been studied in isolation; no large-scale evaluation has examined how GNN architectures and a broad range of loss functions interact across tasks. To address this gap, we conducted a comprehensive study covering seven well-known GNN architectures and 30 single and hybrid loss functions. Our evaluation spanned three distinct real-world datasets, assessing performance in both inductive and transductive settings using 21 evaluation metrics. From these extensive results (detailed in supplementary information 1 & 2), we analyzed the top ten model-loss combinations for each metric based on their average rank. Our findings reveal that, especially in the inductive setting: 1) hybrid loss functions generally yield superior and more robust performance than single loss functions, indicating the benefit of multi-objective optimization; 2) the GIN architecture consistently achieved the highest average performance, especially with Cross-Entropy loss; 3) although some combinations had lower overall average ranks, models such as GAT, particularly with certain hybrid losses, showed notable specialized strengths, securing the most top-1 results across individual metrics and underscoring their suitability for particular task demands; and 4) the MPNN architecture consistently lagged behind the other architectures across the scenarios tested.
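To make the idea of a hybrid loss concrete, the sketch below shows one way such a multi-objective training signal can be assembled: a weighted sum of cross-entropy and a secondary term. The weight `alpha` and the choice of a KL-to-uniform secondary term are illustrative assumptions for this sketch, not the specific combinations evaluated in the study.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits: torch.Tensor, labels: torch.Tensor,
                alpha: float = 0.7) -> torch.Tensor:
    """Weighted sum of two objectives, as one hypothetical hybrid loss.

    Both `alpha` and the secondary KL term are illustrative choices,
    not the paper's specific loss combinations.
    """
    # Primary objective: standard cross-entropy over class logits.
    ce = F.cross_entropy(logits, labels)

    # Secondary objective (assumed here): KL divergence between the
    # predicted distribution and a uniform prior, acting as a
    # confidence-penalty regularizer.
    log_probs = F.log_softmax(logits, dim=-1)
    uniform = torch.full_like(log_probs, 1.0 / logits.size(-1))
    kl = F.kl_div(log_probs, uniform, reduction="batchmean")

    return alpha * ce + (1.0 - alpha) * kl

# Usage with any GNN that produces per-node logits (e.g., a GIN):
logits = torch.randn(8, 4, requires_grad=True)  # [num_nodes, num_classes]
labels = torch.randint(0, 4, (8,))
loss = hybrid_loss(logits, labels)
loss.backward()
```

In this framing, each loss in the mixture pulls the model toward a different objective, which is one plausible reading of why hybrid losses proved more robust across metrics than any single loss.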
The Rise Of Machines That Think
This week's milestones in the history of technology include the end of life of one of the first examples of artificial intelligence, or "giant brains," and its 50th anniversary; patents for the transistor, xerography, and carbon paper; and the first solar-powered mobile phone.

At 11:45pm, the power to the Electronic Numerical Integrator and Computer (ENIAC) is removed. For a few years after it started calculating in 1946, it was "the only fully electronic computer working in the U.S." Thomas Haigh, Mark Priestley, and Crispin Rope write in ENIAC in Action: Making and Remaking the Modern Computer:

Since 1955, when ENIAC punched its last card, its prominence has only grown… ENIAC was as much symbol as machine, producing cultural meanings as well as numbers… In its own small way, ENIAC has returned frequently to the forefront of public awareness over the decades as a symbol of a variety of virtues and vices.

Among other things, the ENIAC was a symbol of the computer as a giant brain (see October 8 entry below), giving rise to today's warnings that artificial intelligence "will be able to do everything better than us."

Walter H. Brattain and John Bardeen are granted a patent for a three-electrode circuit element utilizing semiconductive materials, otherwise known as the transistor.